
    Uncertainty-aware video visual analytics of tracked moving objects

    Vast amounts of video data render manual video analysis infeasible, while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach that exploits the visual analytics methodology. It involves the user in an iterative process of exploration, hypothesis generation, and verification. Scalability is achieved by interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine by adopting the VideoPerpetuoGram (VPG) for visualization and enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather the uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and allow fuzzy hypothesis formulation when interacting with the machine. Finally, we demonstrate the effectiveness of our approach on the video analysis mini challenge that was part of the IEEE Symposium on Visual Analytics Science and Technology 2009.
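
    As a rough illustration of the kind of filter-based trajectory selection described above, the Python sketch below filters tracked objects by mean speed and carries along a per-detection tracker confidence so uncertain results could be flagged for visualization. The Detection record and its confidence field are illustrative assumptions, not the paper's data model.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    x: float           # image position (pixels)
    y: float
    t: float           # timestamp (seconds)
    confidence: float  # tracker confidence in [0, 1]

def mean_speed(track: List[Detection]) -> float:
    """Average speed (pixels per second) along a trajectory."""
    if len(track) < 2:
        return 0.0
    dist = sum(((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
               for a, b in zip(track, track[1:]))
    return dist / max(track[-1].t - track[0].t, 1e-6)

def filter_tracks(tracks: List[List[Detection]],
                  min_speed: float,
                  min_confidence: float) -> List[Tuple[List[Detection], bool, float]]:
    """Keep trajectories matching an interactive speed filter and report how
    confident the tracker was, so uncertainty can be shown alongside results."""
    selected = []
    for track in tracks:
        if track and mean_speed(track) >= min_speed:
            avg_conf = sum(d.confidence for d in track) / len(track)
            selected.append((track, avg_conf >= min_confidence, avg_conf))
    return selected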

    Auditory Support for Situation Awareness in Video Surveillance

    Presented at the 18th International Conference on Auditory Display (ICAD2012), June 18-21, 2012, Atlanta, Georgia. Reprinted by permission of the International Community for Auditory Display, http://www.icad.org.

    We introduce a parameter mapping sonification to support the situational awareness of surveillance operators during their task of monitoring video data. The presented auditory display produces a continuous ambient soundscape reflecting the changes in the video data. For this purpose, we use low-level computer vision techniques, such as optical-flow extraction and background subtraction, and rely on the capabilities of the human auditory system for high-level recognition. Special focus is put on the mapping between video features and sound parameters. We optimize this mapping to provide good interpretability of the sound pattern as well as an aesthetic, non-obtrusive sonification: the precision of the conveyed information, the psychoacoustic capabilities of the auditory system, and aesthetic guidelines of sound design are considered by optimally balancing the mapping parameters using gradient descent. A user study evaluates the capabilities and limitations of the presented sonification, as well as its applicability to supporting situational awareness in surveillance scenarios.

    This work was funded by the German Research Foundation (DFG) as part of the Priority Program "Scalable Visual Analytics" (SPP 1335).
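
    The video-feature side of such a parameter mapping could look roughly like the Python sketch below, which uses OpenCV background subtraction and dense optical flow and maps the resulting activity measures to loudness and pitch values. The input file name and the mapping constants are illustrative assumptions and do not reproduce the paper's optimized mapping.

import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input file
bg = cv2.createBackgroundSubtractorMOG2()

ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Low-level features: foreground ratio and mean optical-flow magnitude.
    fg_ratio = np.count_nonzero(bg.apply(frame)) / gray.size
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion = float(np.mean(np.linalg.norm(flow, axis=2)))

    # Parameter mapping: scene activity controls loudness, motion controls pitch.
    amplitude = min(1.0, 5.0 * fg_ratio)      # illustrative gain only
    pitch_hz = 220.0 * (1.0 + motion)         # illustrative base pitch only
    print(f"amp={amplitude:.2f}  pitch={pitch_hz:.1f} Hz")

    prev_gray = gray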

    Selection of an Optimal Set of Discriminative and Robust Local Features with Application to Traffic Sign Recognition

    Today, discriminative local features are widely used in different fields of computer vision. Due to their strengths, discriminative local features have recently been applied to the problem of traffic sign recognition (TSR). First, we discuss how discriminative local features are applied to TSR and which problems arise in this specific domain. Since TSR has to cope with highly structured and symmetrical objects, which are often captured at low resolution, only a small number of features can be matched correctly. To alleviate these issues, we provide an approach for selecting discriminative and robust features that improves matching performance in terms of speed, recall, and precision. In contrast to recent techniques that rely solely on density estimation in feature space to select highly discriminative features, we additionally address the features' retrievability and positional stability under scale changes as well as their robustness to viewpoint variations. Finally, we combine the proposed methods to obtain a small set of robust features with excellent matching properties.
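
    One of the selection criteria mentioned above, positional stability under scale changes, could be approximated along the lines of the Python sketch below. It uses OpenCV's ORB features and a brute-force matcher as stand-ins and does not reproduce the paper's full selection pipeline (density estimation, viewpoint robustness).

import cv2

def scale_stable_keypoints(image, scale=0.5, max_reproj_err=3.0):
    """Return keypoints of an 8-bit grayscale image that are re-matched at
    the predicted position after the image is downscaled by 'scale'."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(image, None)

    small = cv2.resize(image, None, fx=scale, fy=scale)
    kp2, des2 = orb.detectAndCompute(small, None)
    if des1 is None or des2 is None:
        return []

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    stable = []
    for m in matcher.match(des1, des2):
        x1, y1 = kp1[m.queryIdx].pt
        x2, y2 = kp2[m.trainIdx].pt
        # Keep a feature only if its match lands where the scale change predicts.
        err = ((x1 * scale - x2) ** 2 + (y1 * scale - y2) ** 2) ** 0.5
        if err <= max_reproj_err:
            stable.append(kp1[m.queryIdx])
    return stable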

    Handbook on "Background Modeling and Foreground Detection for Video Surveillance"

    This handbook solicited contributions addressing the wide range of challenges met in background modeling and foreground detection for video surveillance, and thus groups the work of the leading teams in this field over recent years. By incorporating both existing and new ideas, it gives a complete overview of the concepts, theories, algorithms, and applications related to background modeling and foreground detection. First, an introduction to background modeling and foreground detection for beginners is provided by surveying statistical models, clustering models, neural networks, and fuzzy models. Furthermore, leading methods and algorithms for detecting moving objects in video surveillance are presented, and a description of recent complete datasets and codes is given. Moreover, an accompanying website is provided. Finally, with this handbook, we aim to offer a one-stop solution, i.e., access to a number of different models, algorithms, implementations, and benchmarking techniques in a single volume. The handbook consists of five parts. The website contains the list of chapters, their abstracts, and links to the demos, giving the reader quick access to the main resources, datasets, and codes in the field.
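
    As a minimal illustration of the kind of statistical background modeling the handbook surveys, the Python sketch below maintains a running-average background and thresholds the difference to obtain a foreground mask; it is not taken from any particular chapter.

import numpy as np

def foreground_masks(frames, alpha=0.05, threshold=25):
    """Yield a boolean foreground mask for each grayscale frame (uint8)."""
    background = None
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()
        # Pixels far from the background estimate are labeled foreground.
        mask = np.abs(f - background) > threshold
        # Slowly adapt the background toward the current frame.
        background = (1 - alpha) * background + alpha * f
        yield mask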